13 research outputs found

    Analytical Studies of Fragmented-Spectrum Multi-Level OFDM-CDMA Technique in Cognitive Radio Networks

    In this paper, we present a multi-user resource allocation framework using fragmented-spectrum synchronous OFDM-CDMA modulation over a frequency-selective fading channel. In particular, given pre-existing communications in the spectrum where the system is operating, a channel sensing and estimation method is used to obtain information on subcarrier availability. Given this information, real-valued multi-level orthogonal codes, i.e., orthogonal codes with values in {±1, ±2, ±3, ±4, ...}, are provided for emerging new users, i.e., cognitive radio (CR) users. Additionally, we obtain a closed-form expression for the bit error rate of CR receivers in terms of the detection probability of primary users, the CR users' sensing time, and the CR users' signal-to-noise ratio. Moreover, simulation results confirm the accuracy with which the analytical results model the aforementioned system. Comment: 6 pages and 3 figures
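    The defining property of the multi-level codes mentioned above is mutual orthogonality despite taking integer values beyond ±1. A minimal sketch, using a hand-picked illustrative code set (not the paper's construction), verifies this property numerically:

    ```python
    import numpy as np

    # Hypothetical multi-level orthogonal code set with values in
    # {±1, ±2, ...}; these specific vectors are illustrative only.
    codes = np.array([
        [1,  1,  1,  1],
        [1,  1, -1, -1],
        [2, -2,  1, -1],
        [1, -1, -2,  2],
    ])

    gram = codes @ codes.T                    # all pairwise inner products
    off_diag = gram - np.diag(np.diag(gram))  # zero off-diagonal = orthogonal
    assert np.all(off_diag == 0)
    ```

    Note that, unlike Walsh-Hadamard codes, the rows need not share a common norm, so a practical system would also normalize each code's energy.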

    Lexicographically Fair Learning: Algorithms and Generalization

    We extend the notion of minimax fairness in supervised learning problems to its natural conclusion: lexicographic minimax fairness (or lexifairness for short). Informally, given a collection of demographic groups of interest, minimax fairness asks that the error of the group with the highest error be minimized. Lexifairness goes further and asks that amongst all minimax fair solutions, the error of the group with the second highest error should be minimized, and amongst all of those solutions, the error of the group with the third highest error should be minimized, and so on. Despite its naturalness, correctly defining lexifairness is considerably more subtle than minimax fairness, because of inherent sensitivity to approximation error. We give a notion of approximate lexifairness that avoids this issue, and then derive oracle-efficient algorithms for finding approximately lexifair solutions in a very general setting. When the underlying empirical risk minimization problem absent fairness constraints is convex (as it is, for example, with linear and logistic regression), our algorithms are provably efficient even in the worst case. Finally, we show generalization bounds: approximate lexifairness on the training sample implies approximate lexifairness on the true distribution with high probability. Our ability to prove generalization bounds depends on our choosing definitions that avoid the instability of naive definitions.
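    The lexifair ordering described above can be sketched as comparing candidate models by their descending-sorted group-error vectors. This is an illustrative selection rule over a finite candidate set, with hypothetical toy data, not the paper's oracle-efficient algorithm:

    ```python
    import numpy as np

    def group_errors(model, groups):
        # Per-group error rates; `groups` is a list of (X, y) pairs.
        return [float(np.mean(model(X) != y)) for X, y in groups]

    def lexifair_best(models, groups):
        # Lexicographically smallest descending-sorted error vector wins:
        # first minimize the worst group's error, then the second worst, ...
        return min(models, key=lambda m: sorted(group_errors(m, groups),
                                                reverse=True))

    # Toy data: two demographic groups with binary labels (hypothetical).
    groups = [
        (np.zeros((4, 1)), np.array([1, 1, 1, 0])),  # group A
        (np.zeros((4, 1)), np.array([0, 0, 1, 1])),  # group B
    ]
    predict_ones = lambda X: np.ones(len(X), dtype=int)    # errors 0.25, 0.5
    predict_zeros = lambda X: np.zeros(len(X), dtype=int)  # errors 0.75, 0.5
    best = lexifair_best([predict_ones, predict_zeros], groups)
    # best is predict_ones: [0.5, 0.25] beats [0.75, 0.5] lexicographically.
    ```

    The paper's contribution is finding such solutions over infinite, structured model classes via ERM oracles; the sketch only illustrates the ordering being optimized.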

    Ethical Machine Learning: Fairness, Privacy, and the Right to Be Forgotten

    Large-scale algorithmic decision making has increasingly run afoul of various social norms, laws, and regulations. A prominent concern is when a learned model exhibits discrimination against some demographic group, perhaps based on race or gender. Concerns over such algorithmic discrimination have led to a recent flurry of research on fairness in machine learning, which includes new tools for designing fair models and studies the tradeoffs between predictive accuracy and fairness. We address algorithmic challenges in this domain. Preserving the privacy of data when performing analysis on it is not only a basic right for users but is also required by laws and regulations. How should one preserve privacy? After about two decades of fruitful research in this domain, differential privacy (DP) is considered by many the gold standard notion of data privacy. We focus on how differential privacy can be useful beyond preserving data privacy. In particular, we study the connection between differential privacy and adaptive data analysis. Users voluntarily provide huge amounts of personal data to businesses such as Facebook, Google, and Amazon, in exchange for useful services. But a basic principle of data autonomy asserts that users should be able to revoke access to their data if they no longer find the exchange of data for services worthwhile. The right for users to request the erasure of personal data appears in regulations such as the Right to be Forgotten of the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). We provide algorithmic solutions to the problem of removing the influence of data points from machine learning models.
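    The gold-standard baseline for "removing the influence of a data point" is exact unlearning: retrain from scratch on the remaining data, so the model is exactly what it would have been had the point never been observed. A minimal sketch with a hypothetical stand-in model (the sample mean); research methods aim to match this outcome far more cheaply:

    ```python
    import numpy as np

    def fit_mean(data):
        # Stand-in "model": the sample mean of the training data.
        return float(np.mean(data))

    def forget(data, idx):
        # Exact unlearning baseline: delete point idx, retrain from scratch.
        remaining = np.delete(data, idx)
        return remaining, fit_mean(remaining)

    data = np.array([1.0, 2.0, 3.0, 10.0])
    remaining, model = forget(data, 3)  # user requests erasure of 10.0
    # model now equals the mean of [1, 2, 3], exactly as if 10.0
    # had never been observed.
    ```

    The algorithmic challenge the abstract refers to is achieving (or approximating) this guarantee without paying the full cost of retraining on every deletion request.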